


A Dirichlet Mixture Model of Hawkes Processes for Event Sequence Clustering

Neural Information Processing Systems

How to cluster event sequences generated by different point processes is an interesting and important problem in statistical machine learning. To solve it, we propose and discuss an effective model-based clustering method built on a novel Dirichlet mixture model of a special but significant type of point process, the Hawkes process. The proposed model generates the event sequences of different clusters from Hawkes processes with different parameters, and uses a Dirichlet process as the prior distribution over clusters. We prove the identifiability of our mixture model and propose an effective variational Bayesian inference algorithm to learn it. An adaptive inner-iteration allocation strategy is designed to accelerate the convergence of the algorithm. Moreover, we investigate the sample complexity and the computational complexity of our learning algorithm in depth. Experiments on both synthetic and real-world data show that the clustering method based on our model can robustly learn structural triggering patterns hidden in asynchronous event sequences, and achieves superior clustering purity and consistency compared to existing methods.
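To make the generative side of such a model concrete, the sketch below simulates one univariate Hawkes process with an exponential triggering kernel via Ogata's thinning algorithm; in a mixture model like the one described, each cluster would supply its own parameters (mu, alpha, beta). This is an illustrative sketch, not the paper's implementation, and all names and parameter values are hypothetical.

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, t_max, seed=0):
    """Simulate a univariate Hawkes process with intensity
        lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))
    using Ogata's thinning algorithm. Returns sorted event times in [0, t_max)."""
    rng = random.Random(seed)
    events = []
    t = 0.0
    while t < t_max:
        # With an exponential kernel, the intensity only decays between events,
        # so the intensity at the current time is a valid upper bound.
        lam_bar = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)       # propose next candidate time
        if t >= t_max:
            break
        lam_t = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:  # accept with probability lambda(t)/lam_bar
            events.append(t)
    return events
```

For a stationary process one needs alpha/beta < 1; a cluster-specific sequence is obtained by calling `simulate_hawkes` with that cluster's parameters.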




Thinning for Accelerating the Learning of Point Processes

Tianbo Li, Yiping Ke

Neural Information Processing Systems

This paper discusses one of the most fundamental issues about point processes: what is the best sampling method for them? We propose thinning as a downsampling method for accelerating the learning of point processes.
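As a minimal sketch of what thinning looks like as a downsampling step, the function below keeps each event independently with a fixed probability. This is the generic independent-thinning operation (for a Poisson process with rate lam, the result is again Poisson with rate `keep_prob * lam`), not necessarily the paper's exact procedure.

```python
import random

def thin_events(events, keep_prob, seed=0):
    """Independently keep each event time with probability keep_prob.
    A simple downsampling step for event sequences."""
    rng = random.Random(seed)
    return [t for t in events if rng.random() < keep_prob]
```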


Learning Latent Process from High-Dimensional Event Sequences via Efficient Sampling

Qitian Wu, Zixuan Zhang, Xiaofeng Gao, Junchi Yan, Guihai Chen

Neural Information Processing Systems

There are plenty of previous studies targeting the problem from different aspects. For temporal point processes, a great number of works [3, 13, 15, 16, 28] attempt to model the intensity function from statistical views, and recent studies harness deep recurrent models [24], generative adversarial networks [23], and reinforcement learning [19, 18] to learn the temporal process. These studies mainly focus on one-dimensional event sequences where each event possesses the same marker.
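The move from one-dimensional to marked (multi-type) sequences amounts to giving each event type its own baseline and letting events of one type excite another. A hedged sketch of the conditional intensity of a multivariate Hawkes process with exponential kernels, under assumed parameter shapes (`mu[u]` is the baseline of type u, `alpha[u][v]` the excitation of type-v events on type u):

```python
import math

def intensity(u, t, history, mu, alpha, beta):
    """Conditional intensity of event type u at time t:
        lambda_u(t) = mu[u] + sum over past (t_i, v) of alpha[u][v] * exp(-beta * (t - t_i))
    history is a list of (time, type) pairs."""
    return mu[u] + sum(
        alpha[u][v] * math.exp(-beta * (t - ti))
        for ti, v in history
        if ti < t
    )
```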





59b1deff341edb0b76ace57820cef237-AuthorFeedback.pdf

Neural Information Processing Systems

Indeed, the results in Table 1, which shows the mean absolute percentage errors (MAPE), demonstrate this. The accuracy of neural ODE for the Poisson process is on par with our neural JSDE. However, for the Hawkes process (Exponential), Hawkes process (Power-Law), and self-correcting process, neural ODE gives much larger prediction errors. For the social/medical datasets, we used a 20/64-dimensional latent state and parameterized the functions with two-hidden-layer MLPs with 32/64 hidden units. The time series modeling software that we used is designed for long event sequences and ignores the idle time after the last event.
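For reference, the metric named in the feedback, MAPE, is sketched below in its standard form; the authors' exact implementation may differ in scaling or handling of zeros.

```python
def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent.
    Assumes y_true contains no zeros."""
    assert len(y_true) == len(y_pred) and len(y_true) > 0
    return 100.0 * sum(abs(t - p) / abs(t) for t, p in zip(y_true, y_pred)) / len(y_true)
```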